Ziya-LLaMA-13B-Pretrain-v1
License: GPL-3.0
A large-scale pre-trained model with 13 billion parameters based on the LLaMA architecture. Its tokenizer is optimized for Chinese, and the model completed incremental pre-training on 110 billion tokens of Chinese and English text, significantly improving its Chinese generation and comprehension capabilities.
Large Language Model
Transformers
Supports Multiple Languages
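Below is a minimal loading sketch with the Transformers library, assuming the weights are hosted under the repo id "IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1" (an assumption; check the actual repo, as the released weights may be deltas that must first be merged with the original LLaMA checkpoint before use).

```python
# A minimal loading sketch, not an official usage guide.
# The repo id below is an assumption; the released checkpoint may be
# delta weights that require merging with the original LLaMA weights.
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

model_id = "IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B parameters: fp16 to fit on a single large GPU
    device_map="auto",          # spread layers across available devices
)

# A Chinese prompt to exercise the Chinese-optimized tokenizer.
prompt = "中国的首都是"  # "The capital of China is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a pre-trained (not instruction-tuned) checkpoint, plain continuation prompts like the one above are the appropriate usage pattern; chat-style prompting belongs to downstream fine-tuned variants.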